11 - Architectures of Supercomputers

So, let's start. Last time we ended the lecture with the Titan supercomputer, and this time we're going to conclude our list of supercomputers by looking at what came after Titan. Right after Titan came Tianhe-2, which stands for Milky Way, right? Maybe. You don't speak Chinese? I don't speak Chinese either. What does the name mean? Tianhe? It stands for Milky Way. Okay. So, Milky Way 2, the successor of the Tianhe-1 supercomputer. The Titan supercomputer only lasted in the number one spot for half a year and was then succeeded by

Tianhe-2, the Milky Way supercomputer. And the Chinese space station? That's Tiangong, the Chinese word that means sky palace or something like that. So, with the names of their supercomputers and system components, the Chinese engineers are very creative. Okay, we'll see more of that later.

Anyway, it was the number one system from June 2013 to November 2015, so quite a long time. So, let's look into it, starting with the high-level specs.

So, the raw performance on the Linpack benchmark was 33 petaflops, which roughly doubled the performance of the Titan supercomputer, so it was massive. The configuration: it has 3.12 million cores, which is quite something as well. It's equipped with regular Intel Xeon Ivy Bridge processors and Intel Xeon Phi (Knights Corner) coprocessors. It has a proprietary interconnect, so custom-built just for this system. And the power consumption is at almost 18.5 megawatts, so quite a beast.
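
To put those numbers in relation to each other, here is a small back-of-the-envelope sketch using only the rough figures quoted above (33 petaflops, 3.12 million cores, roughly 18.5 megawatts); the officially published values differ slightly, so these are ballpark numbers, not authoritative ones.

```c
#include <stdio.h>

int main(void) {
    /* Rough figures as quoted in the lecture, not the official values. */
    double rmax_flops  = 33.0e15;   /* ~33 PFlop/s Linpack performance */
    double cores       = 3.12e6;    /* ~3.12 million cores             */
    double power_watts = 18.5e6;    /* ~18.5 MW power consumption      */

    printf("GFlop/s per core: %.1f\n", rmax_flops / cores / 1e9);       /* ~10.6 */
    printf("GFlop/s per watt: %.2f\n", rmax_flops / power_watts / 1e9); /* ~1.78 */
    return 0;
}
```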

You can read more about the system in the references here. This is a paper from the engineers and the director of Tianhe-2, and in it they explicitly say that it means Milky Way 2. So, let's look into it.

The node architecture is composed as follows: we have two Intel Xeon Ivy Bridge processors, so the regular big CPUs, which make up 24 cores in total. Together with that, we have three Intel Xeon Phi coprocessors, and each of those has 57 cores. So, we end up with a total of 195 cores per node, which is quite something. Each node then has 64 gigabytes of RAM.
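
The 195 cores follow directly from that configuration, and dividing the system-wide core count by the per-node count gives a rough estimate of the number of nodes. A quick sketch of the arithmetic, again using only the numbers quoted in the lecture:

```c
#include <stdio.h>

int main(void) {
    /* Per-node configuration as described in the lecture. */
    int cpu_sockets = 2, cores_per_cpu = 12;  /* 2 x Xeon (Ivy Bridge)         */
    int phi_cards   = 3, cores_per_phi = 57;  /* 3 x Xeon Phi (Knights Corner) */

    int cores_per_node = cpu_sockets * cores_per_cpu   /* 24  */
                       + phi_cards   * cores_per_phi;  /* 171 */

    printf("cores per node: %d\n", cores_per_node);            /* 195     */
    printf("implied nodes : %.0f\n", 3.12e6 / cores_per_node); /* ~16,000 */
    return 0;
}
```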

A special thing about this system, or rather about the node architecture, is that it's more or less built from off-the-shelf components. You can buy Ivy Bridge processors and you can buy the Xeon Phi coprocessors and build a node like this yourself, well, almost. What's not off-the-shelf is, of course, the network. The node architecture looks a little bit like this: we have the two CPUs, which are connected via the QuickPath Interconnect, and then we have four x16 PCI Express connections, where three of them connect to the Xeon Phi coprocessors and one of them connects to the network interface. And that's more or less the node architecture.

They call it a neo-heterogeneous architecture. The name comes from the fact that the Xeon Phi coprocessors, in contrast to, for example, GPUs, have almost the same instruction set as a CPU. So, if you have a binary compiled for the x86 instruction set, you can run it on the Xeon Phi. And incidentally, on the coprocessor itself there's a full Linux system running. The only real differences are that the coprocessor has wider vector lanes, so wider SIMD instructions, and that each core is four-way multi-threaded. And the core microarchitecture is an in-order microarchitecture, which is based on the Intel Atom processor. So, that's the special thing here.
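
To make the "neo-heterogeneous" point concrete: because the Xeon Phi is an x86 device running its own Linux, host code can offload regions to it with ordinary compiler directives. The following is only a minimal sketch of Intel's Language Extensions for Offload (the `#pragma offload` model that shipped with the Intel compilers of that era), not the Tianhe-2 team's actual code:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n = 1000000;
    float *a = malloc(n * sizeof *a);
    for (int i = 0; i < n; i++) a[i] = 1.0f;

    double sum = 0.0;

    /* Offload the region to the first coprocessor (mic:0). The code in
     * the region is ordinary x86 code; the compiler builds a card-side
     * binary and the runtime copies `a` over PCI Express. */
    #pragma offload target(mic:0) in(a : length(n)) inout(sum)
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += a[i];

    printf("sum = %f\n", sum);   /* expect 1000000.0 */
    free(a);
    return 0;
}
```

This would be built with the Intel compiler of that era (something like `icc -openmp sum.c`, the filename being hypothetical); without a coprocessor present the offload runtime would normally fall back to running the region on the host.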

The interesting part about this whole design is that you now have to drive three coprocessors in addition to your two CPUs. And the researchers at the National University of Defense Technology in China, where the system was built, concluded that the programming models that existed at that time weren't sufficient to conveniently program even one node of the system. And so they invented their own programming model to be able to program the CPUs and the coprocessors together, which they called OpenMC, which stands for Open Mini-Core.
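
The lecture doesn't show any OpenMC syntax, so none is invented here; but to see why a higher-level model was felt to be necessary, here is what driving all three coprocessors of one node looks like with the plain offload pragmas from the sketch above. All the per-device partitioning and data movement is done by hand, and as written the offloads are synchronous, so the cards even run one after another; keeping all three cards plus the host CPUs busy at once takes yet more bookkeeping, which is exactly the burden OpenMC was designed to take away. The `offload.h` header and `_Offload_number_of_devices()` call are assumed from Intel's offload runtime:

```c
#include <stdio.h>
#include <stdlib.h>
#include <offload.h>   /* Intel offload runtime header, assumed available */

int main(void) {
    int n = 3000000;
    float *a = malloc(n * sizeof *a);
    for (int i = 0; i < n; i++) a[i] = 1.0f;

    /* On a Tianhe-2 node this should report three coprocessors;
     * fall back to a single chunk on the host if none are found. */
    int ndev = _Offload_number_of_devices();
    if (ndev < 1) ndev = 1;

    double sum = 0.0;

    /* Hand-partition the array and ship one chunk to each card in turn.
     * This per-device bookkeeping is what a higher-level model such as
     * OpenMC is meant to take off the programmer's hands. */
    for (int d = 0; d < ndev; d++) {
        int lo  = d * (n / ndev);
        int len = (d == ndev - 1) ? n - lo : n / ndev;
        float *chunk = a + lo;
        double s = 0.0;

        #pragma offload target(mic:d) in(chunk : length(len)) inout(s)
        #pragma omp parallel for reduction(+:s)
        for (int i = 0; i < len; i++)
            s += chunk[i];

        sum += s;
    }

    printf("sum over %d device(s) = %f\n", ndev, sum);
    free(a);
    return 0;
}
```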

Okay, so the other components that you see here, the PCH and the CPLD, are there for monitoring, fault detection, et cetera. Another specialty of this supercomputer is that they built a real-time monitoring system around it, with which they are able to monitor the health of the system and of the different compute nodes.
Part of a video series
Access: open access
Duration: 01:32:54
Recorded: 2019-01-23
Uploaded: 2019-04-04 18:39:22
Language: en-US